# Czech large language model

Csmpt7b
License: Apache-2.0
A large Czech language model created by continued pre-training of the English MPT-7B model. Using a Czech tokenizer, it was pre-trained on a large-scale Czech corpus of approximately 67 billion tokens, for a total of roughly 272 billion training tokens (a usage sketch follows below).
Categories: Large Language Model, Other
Developer: BUT-FIT
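
Below is a minimal sketch of loading the model with the Hugging Face Transformers library. It assumes the checkpoint is published under a repository ID such as `BUT-FIT/csmpt7b`; verify the exact ID and any custom-code requirements on the developer's model page before use.

```python
# Minimal sketch: loading Csmpt7b via Hugging Face Transformers.
# The repository ID "BUT-FIT/csmpt7b" is an assumption; check the
# developer's Hugging Face page for the actual checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BUT-FIT/csmpt7b"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short Czech continuation from a prompt.
prompt = "Nejznámějším českým spisovatelem je"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```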